Few Shot Rationale Generation using Self-Training with Dual Teachers
Self-rationalizing models that also generate a free-text explanation for
their predicted labels are an important tool for building trustworthy AI
applications. Since generating explanations for annotated labels is a laborious
and costly process, recent models rely on large pretrained language models
(PLMs) as their backbone and few-shot learning. In this work, we explore a
self-training approach that leverages both labeled and unlabeled data to further
improve few-shot models, under the assumption that neither human-written
rationales nor annotated task labels are available at scale. We introduce a
novel dual-teacher learning framework, which learns two specialized teacher
models, one for task prediction and one for rationalization, via self-training
and distills their knowledge into a multi-tasking student model that can jointly
generate the task label and rationale. Furthermore, we formulate a new loss
function, Masked Label Regularization (MLR), which encourages explanations to be
strongly conditioned on predicted labels. Evaluation on three public datasets
demonstrates that the proposed methods are effective in modeling task labels and
generating faithful rationales.
Comment: ACL Findings 2023
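To make the training flow concrete, the following is a minimal, self-contained sketch of the dual-teacher self-training loop the abstract describes. All names in it are hypothetical stand-ins rather than the authors' code: the toy teachers are string heuristics standing in for fine-tuned PLMs, and the masked_label_regularization function is only one plausible reading of MLR, since the abstract does not spell out its formulation.

```python
# Hypothetical sketch of the dual-teacher self-training loop described in the
# abstract above. Every name here (Example, self_train, the toy_* teachers,
# masked_label_regularization) is a stand-in, not the authors' code: in the
# paper the teachers and the student are fine-tuned PLMs.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Example:
    text: str
    label: Optional[str] = None      # annotated task label, when available
    rationale: Optional[str] = None  # human-written explanation, when available

def self_train(
    labeled: List[Example],
    unlabeled: List[Example],
    task_teacher: Callable[[str], str],            # text -> predicted label
    rationale_teacher: Callable[[str, str], str],  # (text, label) -> rationale
) -> List[Example]:
    """Each specialized teacher pseudo-annotates the unlabeled pool; the
    union of gold and pseudo data becomes the distillation set on which the
    multi-tasking student jointly learns labels and rationales."""
    pseudo = []
    for ex in unlabeled:
        label = task_teacher(ex.text)                  # teacher 1: task label
        rationale = rationale_teacher(ex.text, label)  # teacher 2: explanation
        pseudo.append(Example(ex.text, label, rationale))
    return labeled + pseudo

def masked_label_regularization(nll_with_label: float,
                                nll_label_masked: float,
                                margin: float = 1.0) -> float:
    """One *plausible* reading of MLR (an assumption, not the paper's
    definition): the rationale should become harder to generate when the
    predicted label is masked out of the student's input."""
    return max(0.0, margin - (nll_label_masked - nll_with_label))

# Hypothetical toy teachers, for demonstration only.
def toy_task_teacher(text: str) -> str:
    return "positive" if "good" in text else "negative"

def toy_rationale_teacher(text: str, label: str) -> str:
    return f"The review is {label} because it says: '{text}'"

if __name__ == "__main__":
    labeled = [Example("good movie", "positive", "Praises the movie.")]
    unlabeled = [Example("bad plot"), Example("good acting")]
    for ex in self_train(labeled, unlabeled,
                         toy_task_teacher, toy_rationale_teacher):
        print(ex.label, "|", ex.rationale)
```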
Demo: Smartwatch-based shopping gesture recognition
Ministry of Education, Singapore under its Academic Research Funding Tier; National Research Foundation (NRF) Singapore under IDM Futures Funding Initiative